Harry and Meghan join AI pioneers in call for ban on superintelligent systems
The statement signed by Harry and Meghan was organised by the Future of Life Institute, a US-based AI safety group. Nobel laureates also signed the letter, which says ASI technology should be barred until there is consensus that it can be developed 'safely'. The Duke and Duchess of Sussex have joined artificial intelligence pioneers and Nobel laureates in calling for a ban on developing superintelligent AI systems. Harry and Meghan are among the signatories of a statement calling for "a prohibition on the development of superintelligence". Artificial superintelligence (ASI) is the term for AI systems, yet to be developed, that exceed human levels of intelligence at all cognitive tasks.
- North America > United States (0.75)
- Europe > United Kingdom (0.52)
- Europe > Ukraine (0.07)
- Oceania > Australia (0.05)
- Information Technology > Communications > Social Media (0.78)
- Information Technology > Artificial Intelligence > History (0.72)
Which AI Companies Are the Safest – and Least Safe?
As companies race to build more powerful AI, safety measures are being left behind. A report published Wednesday takes a closer look at how companies including OpenAI and Google DeepMind are grappling with the potential harms of their technology. It paints a worrying picture: flagship models from all the developers in the report were found to have vulnerabilities, and while some companies have taken steps to enhance safety, others lag dangerously behind. The report was published by the Future of Life Institute, a nonprofit that aims to reduce global catastrophic risks. The organization's 2023 open letter calling for a pause on large-scale AI model training drew unprecedented support from 30,000 signatories, including some of technology's most prominent voices.
- North America > Canada > Quebec > Montreal (0.17)
- North America > United States > California > Alameda County > Berkeley (0.05)
- Asia > South Korea > Seoul > Seoul (0.05)
The Drum
The letter, titled "Pause giant AI experiments" and published on March 22 by the Future of Life Institute, a nonprofit organization concerned with mitigating existential risks facing humanity – and especially such risks associated with AI – makes the case that AI research is outpacing our ability to implement protective guardrails. Leading AI companies have become engaged in a dangerous arms race, the authors of the open letter argue, inexorably leading the world to "ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." The letter urges caution, as opposed to a careless march into a potentially dangerous future: "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the authors write. The letter also entreats "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4." As of Monday morning, the letter has received more than 3,100 signatures.
- North America > Canada (0.05)
- Europe > Italy (0.05)
- Europe > Estonia > Harju County > Tallinn (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.75)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.31)
The pause AI movement is remarkable, but won't work
The open letter calling for an immediate six-month pause in the AI development arms race, signed by more than 1,600 tech luminaries, researchers and responsible technology advocates under the umbrella of the Future of Life Institute, is stunning on its face. Self-reflection and caution have never been defining qualities of technology sector leaders. Outside of nuclear technology, it's hard to identify another time when so many have publicly rallied to slow the pace of technology development, much less call for government regulation and intervention. "Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources," the letter states. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than (OpenAI's) GPT-4."
Elon Musk and experts say AI development should be paused immediately
Elon Musk and a group of artificial intelligence experts are calling for a pause in the training of powerful AI systems, citing potential risks to society and humanity. The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, warned that human-competitive AI systems could cause economic and political disruption. "AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter warns. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." It called for a six-month halt to the "dangerous race" to develop systems more powerful than OpenAI's newly launched GPT-4.
- Europe (0.18)
- North America > United States > California (0.06)
- Law (0.78)
- Government (0.54)
- Information Technology > Security & Privacy (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.78)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.40)
The Real Reason Elon Musk Wants To Pause AI Development
Elon Musk signed an open letter on Tuesday calling for a six-month pause in the development of artificial intelligence tools like OpenAI's ChatGPT, a chatbot that has become incredibly popular since it was first made public in November. And while Musk may insist it's all about making sure the technology is safe, there's likely a much simpler explanation: Musk is no longer involved in OpenAI and is frustrated he doesn't have his own version of ChatGPT yet. OpenAI was founded as a nonprofit in 2015, with Elon Musk as the public face of the organization. An article from Wired in early 2016 showed a photo of Musk with his arms crossed, giving the impression he was ready to revolutionize yet another industry. But the story behind Musk's departure from OpenAI is an interesting one, and seems like a much more logical explanation for why the billionaire CEO of several high-tech companies wants to hamper development at OpenAI.
- Information Technology (0.36)
- Government (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Elon Musk, Apple co-founder, other tech experts call for pause on 'giant AI experiments': 'Dangerous race'
As more companies rush to implement AI solutions and software, a growing number of experts are warning that it could result in an explosion of 'fake news' and misinformation. Elon Musk, Steve Wozniak, and a host of other tech leaders and artificial intelligence experts are urging AI labs to pause development of powerful new AI systems in an open letter citing potential risks to society. The letter asks AI developers to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." It was issued by the Future of Life Institute and signed by more than 1,000 people, including Musk, who argued that safety protocols need to be developed by independent overseers to guide the future of AI systems. GPT-4 is the latest deep learning model from OpenAI, which "exhibits human-level performance on various professional and academic benchmarks," according to the lab.
- North America > United States > New York (0.06)
- North America > United States > California (0.06)
Benefits & Risks of Artificial Intelligence - Future of Life Institute
"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before - as long as we manage to keep the technology beneficial." From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons. Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. However, the long-term goal of many researchers is to create general AI (AGI or strong AI).
- Information Technology > Robotics & Automation (0.49)
- Energy > Power Industry (0.47)
- Transportation > Passenger (0.35)
- Transportation > Ground > Road (0.35)
FLI May 2022 Newsletter - Future of Life Institute
Ryan Fedasiuk expressed, in Foreign Policy, his strong sense that the United States and China should 'take steps' towards mitigating the escalatory risks posed by AI accidents. He noted that 'Even with perfect info and ideal operating circumstances, AI systems break easily and perform in ways contrary to their intended function'. This is already true of 'racially biased hiring decisions'; it may be disastrous in AI weapons systems. The piece also discussed the lack of trust between China and the US on the testing and evaluation of their military AI systems. To improve diplomatic negotiations around AI safety, the article recommended three steps: 1. Clarify their current AI processes and principles.
- Asia > China (0.56)
- North America > United States (0.42)
- Government > Foreign Policy (0.62)
- Government > Military (0.42)
FLI March 2022 Newsletter - Future of Life Institute
This Interesting Engineering piece highlights how even an AI built to find 'helpful drugs', when tweaked just a little, can find things that are rather less helpful. Collaborations Pharmaceuticals carried out a simple experiment to see what would happen if the AI they had built was slightly altered to look for chemical weapons rather than medical treatments. According to a paper they published in the journal Nature Machine Intelligence, the answer was not particularly reassuring. When reprogrammed to find chemical weapons, the machine learning algorithm found 40,000 possible options in just six hours. These researchers had 'spent decades using computers and A.I. to improve human health', yet they admitted, after the experiment, that they had been 'naive in thinking about the potential misuse of our trade'.